Open-domain dialogue systems aim to interact with humans through natural language text in an open-ended fashion. However, the widely successful neural networks may not work well for dialogue systems, as they tend to generate generic responses. In this work, we propose an Equal-size Hard Expectation-Maximization (EqHard-EM) algorithm to train a multi-decoder model for diverse dialogue generation. Our algorithm assigns a sample to a decoder in a hard manner and additionally imposes an equal-assignment constraint to ensure that all decoders are well trained. We provide detailed theoretical analysis to justify our approach. Further, experiments on two large-scale open-domain dialogue datasets verify that our EqHard-EM algorithm generates high-quality diverse responses.
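To make the equal-assignment idea concrete, here is a minimal NumPy sketch of a hard E-step under an equal-size constraint. The greedy margin-based balancing heuristic and the `nll` loss matrix are illustrative assumptions, not necessarily the paper's exact procedure.

```python
import numpy as np

def equal_size_hard_assign(nll):
    """Hard E-step with an equal-size constraint (illustrative heuristic).

    nll: (n_samples, n_decoders) negative log-likelihood of each sample
    under each decoder. A plain hard E-step would take the argmin per row;
    here every decoder's load is capped near n_samples / n_decoders so
    that all decoders receive training data.
    """
    n, k = nll.shape
    capacity = np.full(k, n // k)
    capacity[: n % k] += 1                     # distribute the remainder
    assign = np.full(n, -1)
    # Samples with the strongest decoder preference (largest margin between
    # their best and second-best decoder) get to choose first.
    margin = np.partition(nll, 1, axis=1)[:, 1] - nll.min(axis=1)
    for i in np.argsort(-margin):
        for d in np.argsort(nll[i]):           # cheapest decoder first
            if capacity[d] > 0:
                assign[i] = d
                capacity[d] -= 1
                break
    return assign
```

The M-step would then update each decoder only on its assigned samples, alternating with this E-step as in standard hard EM.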
This paper presents a new approach for controlling the emotion of symbolic music generation with Monte Carlo Tree Search. We use Monte Carlo Tree Search as a decoding mechanism to steer the probability distribution learned by a language model towards a given emotion. At every step of the decoding process, we use the Predictor Upper Confidence for Trees (PUCT) to search for sequences that maximize the average values of emotion and quality as given by an emotion classifier and a discriminator, respectively. We use a language model as the policy of the pipeline and the combination of the emotion classifier and the discriminator as its value function. To decode the next token in a piece of music, we sample from the distribution of node visits created during the search. We evaluate the quality of the generated samples with respect to human-composed pieces using a set of objective metrics computed directly from the generated samples. We also perform a user study to evaluate how human subjects perceive the quality and emotion of the generated samples. We compare PUCT against Stochastic Bi-Objective Beam Search (SBBS) and Conditional Sampling (CS). Results suggest that PUCT outperforms SBBS and CS in almost all metrics of music quality and emotion.
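As a rough illustration of the decoding loop, the sketch below shows the standard PUCT selection rule and visit-count sampling. The `Node` structure and the single scalar `q` blending the emotion and quality values are simplifying assumptions.

```python
import math
import random
from dataclasses import dataclass, field

@dataclass
class Node:
    token: int
    prior: float        # language-model probability of this token (the policy)
    q: float = 0.0      # mean backed-up value (emotion classifier + discriminator)
    visits: int = 0
    children: list = field(default_factory=list)

def puct_select(node, c_puct=1.0):
    """PUCT selection: trade off the backed-up value q against the
    language-model prior, discounted by how often a child was visited."""
    total = sum(c.visits for c in node.children)
    return max(node.children,
               key=lambda c: c.q + c_puct * c.prior * math.sqrt(total) / (1 + c.visits))

def sample_next_token(root, temperature=1.0):
    """Sample the next music token from the root's visit-count distribution,
    as described in the abstract."""
    weights = [c.visits ** (1.0 / temperature) for c in root.children]
    return random.choices([c.token for c in root.children], weights=weights)[0]
```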
Data-to-text generation systems aim to generate text descriptions based on input data (often represented in tabular form). A typical system uses huge training samples to learn the correspondence between tables and texts. However, large training sets are expensive to obtain, limiting the applicability of these approaches in real-world scenarios. In this work, we focus on few-shot data-to-text generation. We observe that, while fine-tuned pretrained language models may generate plausible sentences, they suffer from the low semantic coverage problem in the few-shot setting. In other words, important input slots tend to be missing in the generated text. To this end, we propose a search-and-learning approach that leverages pretrained language models but inserts the missing slots to improve the semantic coverage. We further fine-tune our system based on the search results to smooth out the search noise, largely yielding better-quality text and improving inference efficiency. Experiments show that our model achieves high performance on the E2E and WikiBio datasets. In particular, we cover 98.35% of input slots on E2E, largely alleviating the low coverage problem.
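A toy sketch of how low semantic coverage could be detected before a search step inserts missing slots; the exact string matching and the E2E-style `record` below are simplifications of whatever matching the actual system uses.

```python
def missing_slots(record, text):
    """Return the input slots whose values do not appear in the generated
    text; a search step would then propose insertions for these."""
    text_l = text.lower()
    return {k: v for k, v in record.items() if v.lower() not in text_l}

record = {"name": "The Eagle", "food": "French", "area": "riverside"}
draft = "The Eagle serves French food."
print(missing_slots(record, draft))   # {'area': 'riverside'}
```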
Existing graph neural networks (GNNs) largely rely on node embeddings, which represent a node as a vector by its identity, type, or content. However, graphs with unattributed nodes widely exist in real-world applications (e.g., anonymized social networks). Previous GNNs either assign random labels to nodes (which introduces artifacts to the GNN) or assign one embedding to all nodes (which fails to distinguish one node from another). Further, when these GNNs are applied to unattributed node classification problems, they have an undesired equivariance property, which makes them fundamentally unable to address data with multiple possible outputs. In this paper, we analyze the limitations of existing approaches to the node classification problem. Inspired by our analysis, we propose a generalized equivariance property and a Preferential Labeling technique that satisfies the desired property asymptotically. Experimental results show that we achieve high performance in several unattributed node classification tasks.
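One plausible reading of Preferential Labeling is sketched below: among several random one-hot labelings of the unattributed nodes, train only on the one the current model scores best, breaking the tie that a single fixed labeling cannot. The `model`/`graph` interfaces and the selection-by-lowest-loss rule are assumptions; the paper's exact criterion may differ.

```python
import torch

def preferential_labeling_step(model, graph, target, num_samplings=4):
    """Sketch of a preferential-labeling training step (assumed variant).

    model(x, edge_index) is assumed to be a GNN returning node logits;
    graph exposes num_nodes and edge_index; target holds node labels.
    """
    n = graph.num_nodes
    best_loss = None
    for _ in range(num_samplings):
        perm = torch.randperm(n)
        x = torch.eye(n)[perm]                  # a random one-hot identity per node
        logits = model(x, graph.edge_index)
        loss = torch.nn.functional.cross_entropy(logits, target)
        if best_loss is None or loss.item() < best_loss.item():
            best_loss = loss
    best_loss.backward()                        # train only on the preferred labeling
    return best_loss
```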
Semi-supervised learning (SSL) has made significant strides in the field of remote sensing. Large labeled datasets for SSL methods are rarely available, and manually labeling datasets is expensive and time-consuming. Furthermore, accurately identifying remote sensing satellite images is more complicated than it is for conventional images. Class-imbalanced datasets are another prevalent phenomenon, and models trained on them become biased towards the majority classes; this is a critical issue underlying an SSL model's subpar performance. We aim to address the issue of labeling unlabeled data, solve the model bias problem caused by imbalanced datasets, and achieve better accuracy. To accomplish this, we create "artificial" labels and train a model to reasonable accuracy. We iteratively redistribute the classes through resampling using a distribution alignment technique. We use a variety of class-imbalanced satellite image datasets: EuroSAT, UCM, and WHU-RS19. On the balanced UCM dataset, our method outperforms the previous methods MSMatch and FixMatch by 1.21% and 0.6%, respectively. For the imbalanced EuroSAT, our method outperforms MSMatch and FixMatch by 1.08% and 1%, respectively. Our approach significantly lessens the requirement for labeled data, consistently outperforms alternative approaches, and resolves the issue of model bias caused by class imbalance in datasets.
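The distribution alignment step can be sketched as a simple reweighting of the model's class probabilities before pseudo-labeling; the uniform target and the running-mean estimate `model_dist` below are illustrative choices.

```python
import numpy as np

def align_distribution(probs, target_dist, model_dist):
    """Distribution alignment: rescale predicted class probabilities so
    pseudo-labels follow the desired class distribution instead of the
    biased one the model has drifted towards, then renormalize."""
    aligned = probs * (target_dist / (model_dist + 1e-8))
    return aligned / aligned.sum(axis=-1, keepdims=True)

# e.g. a 3-class prediction biased towards class 0
probs = np.array([[0.7, 0.2, 0.1]])
model_dist = np.array([0.6, 0.25, 0.15])     # running mean of model predictions
target = np.full(3, 1 / 3)                   # desired balanced distribution
print(align_distribution(probs, target, model_dist))
```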
Lack of factual correctness is an issue that still plagues state-of-the-art summarization systems despite their impressive progress on generating seemingly fluent summaries. In this paper, we show that factual inconsistency can be caused by irrelevant parts of the input text, which act as confounders. To that end, we leverage information-theoretic measures of causal effects to quantify the amount of confounding and precisely quantify how they affect the summarization performance. Based on insights derived from our theoretical results, we design a simple multi-task model to control such confounding by leveraging human-annotated relevant sentences when available. Crucially, we give a principled characterization of data distributions where such confounding can be large, thereby necessitating the use of human-annotated relevant sentences to generate factual summaries. Our approach improves faithfulness scores by 20% over strong baselines on AnswerSumm (Fabbri et al., 2021), a conversation summarization dataset where lack of faithfulness is a significant issue due to the subjective nature of the task. Our best method achieves the highest faithfulness score while also achieving state-of-the-art results on standard metrics like ROUGE and METEOR. We corroborate these improvements through human evaluation.
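A minimal sketch of the multi-task idea, assuming a generic encoder-decoder that returns its own LM loss and precomputed per-sentence representations; the linear relevance head and the equal loss weighting are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiTaskSummarizer(nn.Module):
    """Jointly generate a summary and predict which input sentences are
    relevant, so the encoder learns to discount confounding content."""
    def __init__(self, encoder_decoder, hidden_size):
        super().__init__()
        self.seq2seq = encoder_decoder                 # assumed to return its LM loss
        self.relevance_head = nn.Linear(hidden_size, 1)

    def forward(self, enc_inputs, dec_inputs, sent_reps, relevance_labels):
        gen_loss = self.seq2seq(enc_inputs, dec_inputs)
        rel_logits = self.relevance_head(sent_reps).squeeze(-1)
        rel_loss = nn.functional.binary_cross_entropy_with_logits(
            rel_logits, relevance_labels.float())
        return gen_loss + rel_loss                     # equal weighting as a default
```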
Image token removal is an efficient augmentation strategy for reducing the cost of computing image features. However, this efficient augmentation strategy has been found to adversely affect the accuracy of CLIP-based training. We hypothesize that removing a large portion of image tokens may improperly discard the semantic content associated with a given text description, thus constituting an incorrect pairing target in CLIP training. To address this issue, we propose an attentive token removal approach for CLIP training, which retains tokens with a high semantic correlation to the text description. The correlation scores are computed in an online fashion using the EMA version of the visual encoder. Our experiments show that the proposed attentive masking approach performs better than the previous method of random token removal for CLIP training. The approach also makes it efficient to apply multiple augmentation views to the image, as well as introducing instance contrastive learning tasks between these views into the CLIP framework. Compared to other CLIP improvements that combine different pre-training targets such as SLIP and MaskCLIP, our method is not only more effective, but also much more efficient. Specifically, using ViT-B and the YFCC-15M dataset, our approach achieves 43.9% top-1 accuracy on ImageNet-1K zero-shot classification, as well as 62.7/42.1 and 38.0/23.2 I2T/T2I retrieval accuracy on Flickr30K and MS COCO, which are +1.1%, +5.5/+0.9, and +4.4/+1.3 higher than the SLIP method, while being 2.30× faster. An efficient version of our approach running 1.16× faster than the plain CLIP model achieves significant gains of +5.3%, +11.3/+8.0, and +9.5/+4.9 on these benchmarks.
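A sketch of attentive token removal, assuming per-token relevance `scores` are already available (in the paper they come online from the EMA copy of the visual encoder); the top-k selection with a fixed `keep_ratio` is an illustrative policy.

```python
import torch

def attentive_token_removal(image_tokens, scores, keep_ratio=0.5):
    """Keep the image tokens most correlated with the text description.

    image_tokens: (batch, n_tokens, dim) patch embeddings
    scores:       (batch, n_tokens) text-relevance score per token
    """
    n_keep = max(1, int(image_tokens.size(1) * keep_ratio))
    idx = scores.topk(n_keep, dim=1).indices                 # most relevant patches
    idx = idx.unsqueeze(-1).expand(-1, -1, image_tokens.size(-1))
    return torch.gather(image_tokens, 1, idx)
```

Unlike random removal, the retained subset is meant to preserve the semantics the paired text describes, keeping the contrastive pairing target valid.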
Most recent head pose estimation (HPE) methods are dominated by the Euler angle representation. To avoid its inherent ambiguity problem of rotation labels, alternative quaternion-based and vector-based representations have been introduced. However, neither is visually intuitive, and both are often derived from equivocal Euler angle labels. In this paper, we present a novel single-stage keypoint-based method via an intuitive and unconstrained 2D cube representation for joint head detection and pose estimation. The 2D cube is an orthogonal projection of the 3D regular hexahedron label roughly surrounding one head, and itself contains the head location. It can reflect the head orientation straightforwardly and unambiguously at any rotation angle. Unlike general 6-DoF object pose estimation, our 2D cube ignores the 3-DoF of head size but retains the 3-DoF of head pose. Based on the prior of equal side length, we can effortlessly obtain the closed-form solution of Euler angles from the predicted 2D head cube instead of applying the error-prone PnP algorithm. In experiments, our proposed method achieves results comparable to other representative methods on the public AFLW2000 and BIWI datasets. Besides, a novel test on the CMU Panoptic dataset shows that our method can be seamlessly adapted to the unconstrained full-view HPE task without modification.
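To show why the equal-side-length prior admits a closed-form solution, here is an illustrative orthographic-projection derivation; the input `u` (three projected edge vectors) and the ZYX-style angle extraction are assumptions, since angle conventions differ across HPE datasets.

```python
import numpy as np

def euler_from_cube(u):
    """Closed-form pose from a projected head cube (illustrative derivation).

    u: (3, 2) array of the three projected cube edge vectors. Under
    orthographic projection with side length s, edge i projects to
    u_i = s * R[:2, i], so the first two rows of the rotation R are
    recoverable up to scale, and orthonormality fixes s.
    """
    # Rows 0 and 1 of R are unit vectors, so the six entries of R[:2, :]
    # have total squared norm 2; that pins down the side length s.
    s = np.sqrt((u ** 2).sum() / 2.0)
    R = np.zeros((3, 3))
    R[:2, :] = u.T / s
    R[2, :] = np.cross(R[0, :], R[1, :])      # complete the right-handed basis
    yaw = np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2]))
    pitch = np.arctan2(R[2, 1], R[2, 2])
    roll = np.arctan2(R[1, 0], R[0, 0])
    return np.degrees([yaw, pitch, roll])
```

Since every quantity falls out of the equal-side-length prior algebraically, no iterative PnP solve is needed.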
Using a Bayesian network to analyze the causal relationships between nodes is an active research topic. Existing network learning algorithms are mainly constraint-based and score-based network generation methods. The constraint-based method mainly applies conditional independence (CI) tests, but the inaccuracy of CI tests under high dimensionality and small sample sizes has always been a problem for this approach. The score-based method uses a scoring function and a search strategy to find the optimal candidate network structure, but the search space grows rapidly with the number of nodes, making learning efficiency very low. This paper presents a new hybrid algorithm, MCME (multiple compound memory erasing). This method retains the advantages of the first two methods, addresses the above shortcomings of CI tests, and introduces innovations to the scoring function in the direction-discrimination stage. A large number of experiments show that MCME achieves better or comparable performance relative to some existing algorithms.
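For context on the score-based ingredient such hybrids build on, below is a textbook BIC node score for discrete networks: standard background only, not MCME's modified scoring function, whose details the abstract does not give.

```python
import numpy as np
from itertools import product

def bic_score(data, child, parents):
    """BIC score of one node given its parents in a discrete Bayesian
    network: maximized log-likelihood minus a complexity penalty.
    Score-based learners sum this over all nodes and search structures."""
    n = len(data)
    child_vals = np.unique(data[:, child])
    parent_vals = [np.unique(data[:, p]) for p in parents]
    ll = 0.0
    for combo in product(*parent_vals):            # each parent configuration
        mask = np.all(data[:, parents] == combo, axis=1)
        n_pa = mask.sum()
        for v in child_vals:
            n_vc = (mask & (data[:, child] == v)).sum()
            if n_vc:
                ll += n_vc * np.log(n_vc / n_pa)
    n_params = (len(child_vals) - 1) * np.prod(
        [len(v) for v in parent_vals], dtype=int)
    return ll - 0.5 * np.log(n) * n_params
```

A greedy hill climber would add, delete, or reverse single edges while the summed score improves, which is exactly where the search space blow-up described above bites.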
While Named Entity Recognition (NER) is a widely studied task, making inferences of entities with only a few labeled data has been challenging, especially for entities with nested structures. Unlike flat entities, entities and their nested entities are more likely to have similar semantic feature representations, drastically increasing the difficulty of classifying different entity categories in the few-shot setting. Although prior work has briefly discussed nested structures in the context of few-shot learning, to the best of our knowledge, this paper is the first one specifically dedicated to studying the few-shot nested NER task. Leveraging contextual dependency to distinguish nested entities, we propose a Biaffine-based Contrastive Learning (BCL) framework. We first design a Biaffine span representation module for learning the contextual span dependency representation for each entity span rather than only learning its semantic representation. We then merge these two representations by a residual connection to distinguish nested entities. Finally, we build a contrastive learning framework to adjust the representation distribution for larger margin boundaries and more generalized domain transfer learning ability. We conducted experimental studies on three nested NER datasets in English, German, and Russian. The results show that BCL outperformed the three baseline models on the 1-shot and 5-shot tasks in terms of F1 score.
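A minimal sketch of a biaffine span scorer of the kind the span representation module builds on; the tensor dimensions and the einsum formulation are illustrative, and the residual fusion with the semantic representation is omitted.

```python
import torch
import torch.nn as nn

class BiaffineSpan(nn.Module):
    """Biaffine span scorer: head/tail projections of a span's boundary
    tokens interact through a trilinear map, capturing the contextual
    dependency between a span's start and end positions."""
    def __init__(self, hidden, out):
        super().__init__()
        self.head = nn.Linear(hidden, out)
        self.tail = nn.Linear(hidden, out)
        self.U = nn.Parameter(torch.randn(out, out, out) * 0.01)

    def forward(self, h):                        # h: (batch, seq, hidden)
        s, e = self.head(h), self.tail(h)        # boundary projections
        # span_rep[b, i, j] = s[b, i]^T U e[b, j] -> (batch, seq, seq, out)
        return torch.einsum("bix,xyz,bjy->bijz", s, self.U, e)
```

In BCL this dependency representation is then merged with the semantic one via a residual connection before the contrastive objective shapes the embedding space.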